Search results for "Weighted voting"

Showing 5 of 5 documents

A Similarity Evaluation Technique for Cooperative Problem Solving with a Group of Agents

1999

Evaluation of distance or similarity is very important in cooperative problem solving with a group of agents. Distance between problems lets agents recognize the nearest solved problems for a new problem, distance between solutions is needed to compare and evaluate the solutions produced by different agents, and distance between agents is useful for estimating the agents' weights so that their solutions can be integrated by weighted voting. The goal of this paper is to develop a similarity evaluation technique for cooperative problem solving with a group of agents. The virtual training environment used for this goal is represented by predicates that define relationships within three sets: prob…
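The weighted-voting integration described in this abstract can be sketched as follows. This is an illustrative assumption, not the paper's method: the inverse-distance weighting rule and the function name are invented here, standing in for whatever weight-estimation scheme the paper derives from inter-agent distances.

```python
# Sketch (assumed, not from the paper): integrating agents' categorical
# solutions by weighted voting, where each agent's weight is derived
# from a distance estimate -- closer agents get larger weights.

def weighted_vote(predictions, distances):
    """Combine categorical predictions; agents with smaller distance
    to the problem receive larger voting weights."""
    scores = {}
    for label, d in zip(predictions, distances):
        w = 1.0 / (1.0 + d)  # simple inverse-distance weight (assumption)
        scores[label] = scores.get(label, 0.0) + w
    return max(scores, key=scores.get)

print(weighted_vote(["A", "B", "A"], [0.1, 0.5, 0.9]))  # prints A
```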

Keywords: Multiagent systems, Theoretical computer science, Similarity (network science), Computer science, Group (mathematics), Structure (category theory), Weighted voting, Information system, Virtual training, Artificial intelligence, Cooperative problem solving

Expanding context against weighted voting of classifiers

2000

In this paper we propose a new method to integrate the predictions of multiple classifiers for Data Mining and Machine Learning tasks. The method assumes that each classifier stands in its own context and that the contexts are partially ordered. The order is defined by a monotonic quality function that maps each context to a value in the interval [0,1]. A classifier whose context has better quality is expected to predict better than one whose context has worse quality. The objective is to generate the opinion of a `virtual' classifier that stands in the context with quality equal to 1; owing to its best context, this virtual classifier should have the best prediction accuracy. To do thi…
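The quality-weighted combination idea above can be illustrated with a minimal sketch. Both the linear quality weighting and the function name are assumptions for illustration; the paper's actual goal of extrapolating toward a quality-1 virtual classifier is more involved than this.

```python
# Hedged sketch: each classifier carries a context quality in [0, 1],
# and predictions are combined with weights proportional to that quality
# (the linear weighting is an illustrative assumption).

def combine_by_context_quality(predictions, qualities):
    """Sum the quality weights behind each predicted label and return
    the label with the largest total."""
    scores = {}
    for label, q in zip(predictions, qualities):
        scores[label] = scores.get(label, 0.0) + q
    return max(scores, key=scores.get)
```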

Keywords: Computer science, Weighted voting, Feature selection, Quadratic classifier, Machine learning, Information extraction, Pattern recognition, Knowledge extraction, Voting, Margin classifier, Artificial intelligence, Classifier (UML)
Published in: SPIE Proceedings

Dynamic integration of multiple data mining techniques in a knowledge discovery management system

1999

One of the most important directions for improving data mining and knowledge discovery is the integration of multiple classification techniques into an ensemble of classifiers. An integration technique should be able to estimate and select the most appropriate component classifiers from the ensemble. We present two variations of an advanced dynamic integration technique with two distance metrics. The technique is a variation of the stacked generalization method, under the assumption that each component classifier is the best one within a certain subarea of the whole domain. Our technique includes two phases: a learning phase and an application phase. During the learnin…
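The two-phase structure described in the abstract can be sketched as below. This is a hedged illustration under assumptions: the error-matrix representation, the kNN-style local-error estimate, the choice of k, and the Euclidean metric are all invented here, not taken from the paper.

```python
# Illustrative two-phase dynamic integration sketch (assumptions: error
# matrix over training instances, kNN local-error estimation, Euclidean
# distance, selection of the single locally best classifier).
import math

def learn_phase(instances, labels, classifiers):
    """Learning phase: for each training instance, record whether each
    component classifier misclassified it (0 = correct, 1 = error)."""
    return [[0 if clf(x) == y else 1 for clf in classifiers]
            for x, y in zip(instances, labels)]

def apply_phase(x, instances, errors, classifiers, k=3):
    """Application phase: estimate each classifier's local error from the
    k nearest training instances and apply the locally best classifier."""
    nearest = sorted(range(len(instances)),
                     key=lambda i: math.dist(instances[i], x))[:k]
    local_err = [sum(errors[i][j] for i in nearest)
                 for j in range(len(classifiers))]
    best = min(range(len(classifiers)), key=lambda j: local_err[j])
    return classifiers[best](x)
```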

Keywords: Computer science, Weighted voting, Machine learning, Expert system, Matrix (mathematics), Information extraction, Pattern recognition, Knowledge extraction, Management system, Data mining, Artificial intelligence, Classifier (UML)
Published in: Data Mining and Knowledge Discovery: Theory, Tools, and Technology

Bagging and Boosting with Dynamic Integration of Classifiers

2000

One approach to classification tasks is to use machine learning techniques to derive classifiers from learning instances. The cooperation of several base classifiers as a decision committee has been shown to reduce classification error. The main current decision-committee learning approaches, boosting and bagging, use resampling of the training set and can be combined with different machine learning techniques that derive the base classifiers. Boosting uses a kind of weighted voting, while bagging uses equal-weight voting as the combining method. Neither takes into account the local aspects that the base classifiers may have inside the problem space. We have proposed a dynamic integration tech…
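The contrast drawn above between the two combining rules can be sketched minimally. The function names are invented for illustration; the weighted rule follows the general AdaBoost-style pattern of per-classifier weights, not any specific paper's formulation.

```python
# Minimal illustration of the two combining rules mentioned above:
# bagging combines by equal-weight majority vote, boosting by a
# weighted vote (per-classifier weights, as in AdaBoost-style schemes).
from collections import Counter

def bagging_vote(predictions):
    """Equal-weight majority vote over the base classifiers' labels."""
    return Counter(predictions).most_common(1)[0][0]

def boosting_vote(predictions, alphas):
    """Weighted vote; alpha_t is classifier t's voting weight
    (in AdaBoost it reflects that classifier's training error)."""
    scores = {}
    for label, a in zip(predictions, alphas):
        scores[label] = scores.get(label, 0.0) + a
    return max(scores, key=scores.get)
```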

Keywords: Boosting (machine learning), Training set, Computer science, Weighted voting, Machine learning, Random subspace method, Ensembles of classifiers, Voting, AdaBoost, Artificial intelligence, Gradient boosting

Handling local concept drift with dynamic integration of classifiers: domain of antibiotic resistance in nosocomial infections

2006

In the real world, concepts and data distributions are often not stable but change with time. This problem, known as concept drift, complicates the task of learning a model from data and requires special approaches, different from commonly used techniques that treat arriving instances as equally important contributors to the target concept. Among the most popular and effective approaches to handling concept drift is ensemble learning, where a set of models built over different time periods is maintained and either the best model is selected or the predictions of the models are combined. In this paper we consider the use of an ensemble integration technique that helps to better handle concept drift at t…
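The "select the best model" branch of the ensemble approach described above can be sketched as follows. The selection-by-recent-accuracy rule, the window contents, and the function name are assumptions for illustration; they are not the integration technique the paper itself develops.

```python
# Hedged sketch: an ensemble of models built over different time periods,
# where at prediction time the member scoring best on the most recent
# window of labelled data is selected (selection rule is an assumption).

def select_recent_best(models, recent_x, recent_y):
    """Return the ensemble member with the highest accuracy on the
    most recent window -- the member least affected by drift."""
    def acc(model):
        return sum(model(x) == y for x, y in zip(recent_x, recent_y)) / len(recent_x)
    return max(models, key=acc)
```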

Keywords: Concept drift, Computer science, Weighted voting, Machine learning, Ensemble learning, Domain (software engineering), Artificial intelligence, Data mining